24 research outputs found
Generalized linear mixing model accounting for endmember variability
Endmember variability is an important factor for accurately revealing
information about the pure materials and their distribution in hyperspectral
images. Recently, the extended linear mixing model (ELMM) has been proposed as
a modification of the linear mixing model (LMM) to consider endmember
variability effects resulting mainly from illumination changes. In this paper,
we further generalize the ELMM leading to a new model (GLMM) to account for
more complex spectral distortions where different wavelength intervals can be
affected unevenly. We also extend the existing methodology to jointly estimate
the variability and the abundances for the GLMM. Simulations with real and
synthetic data show that the unmixing process can benefit from the extra
flexibility introduced by the GLMM.
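The relationship between the three models can be sketched numerically; the sizes, endmember matrix, and scaling factors below are illustrative toy values, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

L, R = 50, 3                        # bands, endmembers (toy sizes)
M = rng.uniform(0.1, 1.0, (L, R))   # endmember matrix (columns = spectra)
a = np.array([0.5, 0.3, 0.2])       # abundances, sum to one

# LMM: the pixel is a convex combination of the endmembers
y_lmm = M @ a

# ELMM: one scaling factor per endmember models illumination changes
psi = np.array([0.9, 1.1, 1.0])
y_elmm = M @ np.diag(psi) @ a

# GLMM (sketch): the scaling varies per band *and* per endmember, so
# different wavelength intervals can be distorted unevenly
Psi = 1.0 + 0.1 * rng.standard_normal((L, R))
y_glmm = (Psi * M) @ a
```

Setting `Psi` to a matrix with identical rows recovers the ELMM, and setting it to all ones recovers the LMM, which is the nesting described above.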
Stochastic Analysis of the LMS Algorithm for System Identification with Subspace Inputs
This paper studies the behavior of the low-rank LMS adaptive algorithm for the general case in which the input transformation may not capture the exact input subspace. It is shown that the Independence Theory and the independent additive noise model are not applicable to this case. A new theoretical model for the weight mean and fluctuation behaviors is developed which incorporates the correlation between successive data vectors (as opposed to the Independence Theory model). The new theory is applied to a network echo cancellation scheme which uses partial-Haar input vector transformations. Comparison of the new model predictions with Monte Carlo simulations shows good-to-excellent agreement, certainly much better than that predicted by the Independence Theory-based model available in the literature.
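A minimal sketch of this setting, with a random orthonormal transform as a stand-in for a partial-Haar transformation and toy dimensions and step size:

```python
import numpy as np

rng = np.random.default_rng(1)

N, K, mu = 16, 4, 0.05            # full length, subspace rank, step size (toy values)
T = np.linalg.qr(rng.standard_normal((N, K)))[0]   # orthonormal N x K transform
# Unknown system: mostly, but not entirely, inside span(T), so the
# transformation does not capture the exact input subspace
w_true = T @ rng.standard_normal(K) + 0.1 * rng.standard_normal(N)

g = np.zeros(K)                   # reduced-rank adaptive weights
for _ in range(3000):
    x = rng.standard_normal(N)    # white input vector
    d = w_true @ x                # desired signal (noiseless for clarity)
    u = T.T @ x                   # transformed (subspace) input
    e = d - g @ u                 # a priori error
    g += mu * e * u               # LMS update on the reduced weights

w_hat = T @ g                     # equivalent full-length estimate
# w_hat can only reach the projection of w_true onto span(T); the
# out-of-subspace part of w_true acts as an irreducible disturbance
```

The residual error floor in this sketch comes from the component of `w_true` outside `span(T)`, which is precisely the mismatch the paper's analysis accounts for.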
Stochastic analysis of an error power ratio scheme applied to the affine combination of two LMS adaptive filters
The affine combination of two adaptive filters that simultaneously adapt on the same inputs has been actively investigated. In these structures, the filter outputs are linearly combined to yield a performance that is better than that of either filter. Various decision rules can be used to determine the time-varying parameter for combining the filter outputs. A recently proposed scheme based on the ratio of error powers of the two filters has been shown by simulation to achieve nearly optimum performance. The purpose of this paper is to present a first analysis of the statistical behavior of this error power scheme for white Gaussian inputs. Expressions are derived for the mean behavior of the combination parameter and for the adaptive weight mean-square deviation. Monte Carlo simulations show good-to-excellent agreement with the theoretical predictions.
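One plausible instantiation of such an error-power-ratio rule, sketched with assumed smoothing and step-size values (the paper's exact decision rule and parameters may differ):

```python
import numpy as np

rng = np.random.default_rng(2)

N = 8
w_true = rng.standard_normal(N)     # unknown channel
w1, w2 = np.zeros(N), np.zeros(N)   # fast and slow LMS filters
mu1, mu2 = 0.1, 0.01
p1 = p2 = 1e-3                      # smoothed error powers
beta = 0.98                         # smoothing factor (assumed)
lam_hist = []

for n in range(5000):
    x = rng.standard_normal(N)                      # white Gaussian input
    d = w_true @ x + 0.1 * rng.standard_normal()    # noisy desired signal
    y1, y2 = w1 @ x, w2 @ x
    e1, e2 = d - y1, d - y2
    w1 += mu1 * e1 * x                              # fast LMS update
    w2 += mu2 * e2 * x                              # slow LMS update
    p1 = beta * p1 + (1 - beta) * e1 ** 2
    p2 = beta * p2 + (1 - beta) * e2 ** 2
    lam = p2 / (p1 + p2)            # weight toward the lower-error filter
    y = lam * y1 + (1 - lam) * y2   # combined output
    lam_hist.append(lam)
```

Early in adaptation the fast filter has the smaller error power, so the combining parameter favors it; near steady state the slow filter's smaller excess error pulls the parameter back toward the other branch.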
Echo Cancellation: The Generalized Likelihood Ratio Test for Double-Talk vs. Channel Change
Echo cancellers are required in both electrical (impedance mismatch) and acoustic (speaker-microphone coupling) applications. One of the main design problems is the control logic for adaptation. Basically, the algorithm weights should be frozen in the presence of double-talk and adapt quickly in the absence of double-talk. The optimum likelihood ratio test (LRT) for this problem was studied in a recent paper. The LRT requires a priori knowledge of the background noise and double-talk power levels. Instead, this paper derives a generalized log likelihood ratio test (GLRT) that does not require this knowledge. The probability density function of a sufficient statistic under each hypothesis is obtained and the performance of the test is evaluated as a function of the system parameters. The receiver operating characteristics (ROCs) indicate that it is difficult to correctly decide between double-talk and a channel change based upon a single look. However, detection based on about 200 successive samples yields a detection probability close to unity (0.99) with a small false alarm probability (0.01) for the theoretical GLRT model. Application of a GLRT-based echo canceller (EC) to real voice data shows comparable performance to that of the LRT-based EC given in a recent paper.
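For intuition, the freeze-versus-adapt control logic can be sketched with a classical Geigel-style level detector standing in for the GLRT; all signal models, gains, and thresholds below are illustrative assumptions, not the paper's statistic:

```python
import numpy as np

rng = np.random.default_rng(3)

L, mu = 32, 0.5
h = 0.1 * rng.standard_normal(L) * np.exp(-0.2 * np.arange(L))  # toy echo path
w = np.zeros(L)                   # adaptive echo-canceller weights
x_buf = np.zeros(L)               # recent far-end samples
flags = []

def double_talk(d_n, x_buf, thresh=0.5):
    # Geigel-style detector, a crude stand-in for the GLRT: flag double-talk
    # when the microphone sample is large relative to the recent far-end signal
    return abs(d_n) > thresh * max(np.max(np.abs(x_buf)), 1e-8)

for n in range(4000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()                   # far-end signal (white proxy)
    near = 2.0 * rng.standard_normal() if 1500 <= n < 2000 else 0.0  # double-talk burst
    d = h @ x_buf + near + 0.01 * rng.standard_normal()  # microphone signal
    e = d - w @ x_buf                                    # residual echo
    dt = double_talk(d, x_buf)
    flags.append(dt)
    if not dt:
        # adapt only when double-talk is not detected (NLMS update)
        w += mu * e * x_buf / (x_buf @ x_buf + 1e-8)
```

The GLRT studied in the paper replaces this heuristic threshold with a statistic whose distribution under each hypothesis is known, which is what enables the ROC analysis.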
An affine combination of two LMS adaptive filters - Transient mean-square analysis
This paper studies the statistical behavior of an affine combination of the outputs of two LMS adaptive filters that simultaneously adapt using the same white Gaussian inputs. The purpose of the combination is to obtain an LMS adaptive filter with fast convergence and small steady-state mean-square deviation (MSD). The linear combination studied is a generalization of the convex combination, in which the combination factor is restricted to the interval [0, 1]. The viewpoint is taken that each of the two filters produces dependent estimates of the unknown channel. Thus, there exists a sequence of optimal affine combining coefficients which minimizes the MSE. First, the optimal unrealizable affine combiner is studied and provides the best possible performance for this class. Then two new schemes are proposed for practical applications. The mean-square performances are analyzed and validated by Monte Carlo simulations. With proper design, the two practical schemes yield an overall MSD that is usually less than the MSDs of either filter.
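As an illustration of why an unconstrained affine combiner can help, the sketch below computes an oracle combining coefficient that minimizes the weight deviation at each step. This is a deviation-optimal stand-in for the paper's unrealizable MSE-optimal combiner, with toy parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)

N = 8
w_true = rng.standard_normal(N)     # unknown channel
w1, w2 = np.zeros(N), np.zeros(N)   # fast and slow LMS filters
mu1, mu2 = 0.1, 0.01
msd_hist = []

for n in range(3000):
    x = rng.standard_normal(N)
    d = w_true @ x + 0.1 * rng.standard_normal()
    e1, e2 = d - w1 @ x, d - w2 @ x
    w1 += mu1 * e1 * x
    w2 += mu2 * e2 * x
    # Oracle affine combiner (unrealizable: it uses w_true): choose lam to
    # minimize ||lam*w1 + (1-lam)*w2 - w_true||^2. Note lam is NOT clipped
    # to [0, 1], which is what distinguishes affine from convex combining.
    diff = w1 - w2
    denom = diff @ diff
    lam = (diff @ (w_true - w2)) / denom if denom > 0 else 0.5
    w_c = lam * w1 + (1 - lam) * w2
    msd_hist.append(np.sum((w_c - w_true) ** 2))
```

Because the coefficient is optimized over all of its real values, the combined deviation is never worse than that of either component filter, matching the qualitative claim in the abstract.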
Stochastic Behavior Analysis of the Gaussian Kernel Least-Mean-Square Algorithm
The kernel least-mean-square (KLMS) algorithm is a popular algorithm in nonlinear adaptive filtering due to its simplicity and robustness. In kernel adaptive filters, the statistics of the input to the linear filter depend on the parameters of the kernel employed. Moreover, practical implementations require a finite nonlinearity model order. A Gaussian KLMS has two design parameters, the step size and the Gaussian kernel bandwidth. Thus, its design requires analytical models for the algorithm behavior as a function of these two parameters. This paper studies the steady-state and transient behavior of the Gaussian KLMS algorithm for Gaussian inputs and a finite-order nonlinearity model. In particular, we derive recursive expressions for the mean-weight-error vector and the mean-square error. The model predictions show excellent agreement with Monte Carlo simulations in both the transient and steady-state phases. This allows the explicit analytical determination of stability limits, and makes it possible to choose the algorithm parameters a priori in order to achieve a prescribed convergence speed and quality of the estimate. Design examples are presented which validate the theoretical analysis and illustrate its application.
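A minimal Gaussian KLMS sketch, learning a simple nonlinearity online; the target function and the values assumed for the two design parameters (step size and kernel bandwidth) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

mu, sigma = 0.2, 0.5              # step size and Gaussian kernel bandwidth (assumed)

def kernel(X, y):
    # Gaussian kernel between stored centers X and a new input y
    return np.exp(-np.sum((X - y) ** 2, axis=1) / (2 * sigma ** 2))

centers = []                      # stored inputs (kernel centers)
alphas = []                       # their expansion coefficients

def predict(y):
    if not centers:
        return 0.0
    return float(np.array(alphas) @ kernel(np.array(centers), y))

# Learn a toy nonlinearity d = sin(x) online
errs = []
for _ in range(2000):
    x = rng.uniform(-2, 2, size=1)
    d = np.sin(x[0]) + 0.01 * rng.standard_normal()
    e = d - predict(x)            # a priori error
    centers.append(x)             # KLMS grows the kernel expansion by one term...
    alphas.append(mu * e)         # ...with coefficient mu * e
    errs.append(e ** 2)
```

Note how both `mu` and `sigma` shape the error trajectory: this coupling between the two design parameters is exactly why the analytical transient and steady-state models derived in the paper are useful for choosing them a priori.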
A Low-rank Tensor Regularization Strategy for Hyperspectral Unmixing
Tensor-based methods have recently emerged as a more natural and effective
formulation to address many problems in hyperspectral imaging. In hyperspectral
unmixing (HU), low-rank constraints on the abundance maps have been shown to
act as a regularization which adequately accounts for the multidimensional
structure of the underlying signal. However, imposing a strict low-rank
constraint for the abundance maps does not seem to be adequate, as important
information that may be required to represent fine scale abundance behavior may
be discarded. This paper introduces a new low-rank tensor regularization that
adequately captures the low-rank structure underlying the abundance maps
without hindering the flexibility of the solution. Simulation results with
synthetic and real data show that the extra flexibility introduced by the
proposed regularization significantly improves the unmixing results.
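The tension between a strict low-rank truncation and a softer low-rank penalty can be illustrated on a matrix (rather than tensor) stand-in. This SVD-based sketch is only an analogy for the idea, not the paper's regularizer:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy "abundance map": a low-rank background plus fine-scale detail
n = 32
bg = np.outer(np.linspace(0, 1, n), np.ones(n))   # rank-1 background
detail = 0.05 * rng.standard_normal((n, n))       # fine-scale variation
A = bg + detail

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Strict low-rank constraint: keep only the leading singular component,
# discarding the fine-scale detail entirely
A_hard = s[0] * np.outer(U[:, 0], Vt[0])

# Soft-thresholding of the singular values: shrinks toward low rank while
# retaining some energy in the remaining components, so fine-scale
# behavior is attenuated rather than discarded
tau = 0.1
s_soft = np.maximum(s - tau, 0.0)
A_soft = (U * s_soft) @ Vt
```

The soft variant reconstructs the map with smaller error than the hard truncation while still favoring low-rank structure, which mirrors the abstract's argument against imposing a strict low-rank constraint.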
Super-Resolution for Hyperspectral and Multispectral Image Fusion Accounting for Seasonal Spectral Variability
Image fusion combines data from different heterogeneous sources to obtain
more precise information about an underlying scene. Hyperspectral-multispectral
(HS-MS) image fusion is currently attracting great interest in remote sensing
since it allows the generation of high spatial resolution HS images,
circumventing the main limitation of this imaging modality. Existing HS-MS
fusion algorithms, however, neglect the spectral variability often existing
between images acquired at different time instants. This time difference causes
variations in spectral signatures of the underlying constituent materials due
to different acquisition and seasonal conditions. This paper introduces a novel
HS-MS image fusion strategy that combines an unmixing-based formulation with an
explicit parametric model for typical spectral variability between the two
images. Simulations with synthetic and real data show that the proposed
strategy leads to a significant performance improvement under spectral
variability and state-of-the-art performance otherwise.